Results 1 - 8 of 8
1.
Transl Vis Sci Technol ; 13(4): 18, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38607633

ABSTRACT

Purpose: To investigate the visualization capabilities of high-speed swept-source optical coherence tomography (SS-OCT) in cataract surgery.
Methods: Cataract surgery was simulated in wet labs with ex vivo porcine eyes. Each phase of the surgery was visualized with a novel surgical microscope-integrated SS-OCT with a variable imaging speed of over 1 million A-scans per second. It was designed to provide four-dimensional (4D) live-volumetric videos, live B-scans, and volume capture scans.
Results: Four-dimensional videos, B-scans, and volume capture scans of corneal incision, ophthalmic viscosurgical device injection, capsulorrhexis, phacoemulsification, intraocular lens (IOL) injection, and position of the unfolded IOL in the capsular bag were recorded. The flexibility of the SS-OCT system allowed us to tailor the scanning parameters to the specific demands of dynamic surgical steps and static pauses. The entire length of the eye was recorded in a single scan, and unfolding of the IOL was visualized dynamically.
Conclusions: The presented novel visualization method for fast ophthalmic surgical microscope-integrated intraoperative OCT imaging in cataract surgery allowed the visualization of all major steps of the procedure by achieving large imaging depths covering the entire eye and high acquisition speeds enabling live volumetric 4D-OCT imaging. This promising technology may become an integral part of routine and advanced robotic-assisted cataract surgery in the future.
Translational Relevance: We demonstrate the visualization capabilities of a cutting-edge swept-source OCT system integrated into an ophthalmic surgical microscope during cataract surgery.
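A back-of-the-envelope sketch of why A-scan rates above 1 MHz make live 4D imaging feasible. The lateral sampling figures below are illustrative assumptions, not scan parameters reported in the abstract.

```python
# Back-of-the-envelope: volume rate achievable at a given A-scan rate.
# The scan-pattern sizes used below are illustrative assumptions, not
# values from the paper.

def volume_rate_hz(a_scan_rate_hz: float,
                   a_scans_per_b_scan: int,
                   b_scans_per_volume: int) -> float:
    """Volumes per second, ignoring flyback and dead time."""
    return a_scan_rate_hz / (a_scans_per_b_scan * b_scans_per_volume)

# A 1 MHz engine with a 200 x 200 lateral sampling grid yields 25
# volumes per second, enough for live volumetric visualization.
print(volume_rate_hz(1_000_000, 200, 200))  # -> 25.0
```

At spectral-domain rates of around 100 kHz, the same grid would allow only 2.5 volumes per second, which illustrates why the speed gap matters for live 4D imaging.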


Subjects
Cataract, Intraocular Lenses, Ophthalmology, Swine, Animals, Optical Coherence Tomography, Eye
2.
J Robot Surg ; 17(6): 2735-2742, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37670151

ABSTRACT

The purpose of this study is to compare robot-assisted and manual subretinal injections in terms of successful subretinal blistering, reflux incidence, and damage to the retinal pigment epithelium (RPE). Subretinal injection was simulated on 84 ex vivo porcine eyes, with half of the interventions carried out manually and the other half by controlling a custom-built robot in a master-slave fashion. After pars plana vitrectomy (PPV), the retinal target spot was determined under a LUMERA 700 microscope with the microscope-integrated intraoperative optical coherence tomography (iOCT) system RESCAN 700 (Carl Zeiss Meditec, Germany). For injection, a 1 ml syringe filled with perfluorocarbon liquid (PFCL) was tipped with a 40-gauge metal cannula (Incyto Co., Ltd., South Korea). In one set of trials, the needle was attached to the robot's end joint and maneuvered robotically to the retinal target site; in another set, the retina was approached manually. Intraretinal cannula-tip depth was monitored continuously via iOCT. At sufficient depth, PFCL was injected into the subretinal space. iOCT images and fundus video recordings were used to evaluate the surgical outcome. Robotic injections more often achieved successful subretinal blistering (73.7% vs. 61.8%, p > 0.05) and showed a significantly lower incidence of reflux (23.7% vs. 58.8%, p < 0.01). Although larger tip depths were achieved in successful manual trials, RPE penetration occurred in 10.5% of robotic but 26.5% of manual cases (p > 0.05). In conclusion, significantly fewer reflux incidences occurred with the use of the robot. Furthermore, RPE penetrations occurred less frequently and successful blistering more frequently in robotic surgery.
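The significance of the reflux difference can be checked with a standard two-proportion z-test. A minimal sketch follows; the group counts (9 of 38 robotic vs. 20 of 34 manual) are assumptions reconstructed from the reported percentages, not figures stated in the abstract.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided two-proportion z-test with pooled variance.

    Returns (z, p) for H0: the two underlying proportions are equal.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p

# Reflux: robotic 9/38 (23.7%) vs. manual 20/34 (58.8%) -- counts are
# assumptions back-computed from the reported percentages.
z, p = two_proportion_z_test(9, 38, 20, 34)
print(f"z = {z:.2f}, p < 0.01: {p < 0.01}")  # z is about 3.0, consistent with p < 0.01
```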


Subjects
Robotic Surgical Procedures, Robotics, Humans, Animals, Swine, Optical Coherence Tomography/methods, Robotic Surgical Procedures/methods, Retina, Vitrectomy/methods
3.
Micromachines (Basel) ; 14(6)2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37374846

ABSTRACT

This study aimed to compare the efficacy of robot-assisted and manual cannula insertion in simulated big-bubble deep anterior lamellar keratoplasty (DALK). Novice surgeons with no prior experience in performing DALK were trained to perform the procedure using manual or robot-assisted techniques. The results showed that both methods could generate an airtight tunnel in the porcine cornea and, in most cases, successfully produce a deep stromal demarcation plane representing sufficient depth for big-bubble generation. However, the combination of intraoperative OCT and robotic assistance yielded a significant increase in the depth of achieved detachment in non-perforated cases, with a mean of 89% of corneal thickness as opposed to 85% in manual trials. This research suggests that robot-assisted DALK may offer certain advantages over manual techniques, particularly when used in conjunction with intraoperative OCT.

4.
Biomed Opt Express ; 14(2): 846-865, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36874504

ABSTRACT

Intraoperative optical coherence tomography is still far from pervasive in routine ophthalmic surgery, despite evident clinical benefits. That is because today's spectral-domain optical coherence tomography systems lack flexibility, acquisition speed, and imaging depth. We present, to the best of our knowledge, the most flexible swept-source optical coherence tomography (SS-OCT) engine coupled to an ophthalmic surgical microscope, operating at MHz A-scan rates. We use a MEMS tunable VCSEL to implement application-specific imaging modes, enabling diagnostic and documentary capture scans, live B-scan visualizations, and real-time 4D-OCT renderings. The technical design and implementation of the SS-OCT engine, as well as the reconstruction and rendering platform, are presented. All imaging modes are evaluated in surgical mock maneuvers using ex vivo bovine and porcine eye models. The applicability and limitations of MHz SS-OCT as a visualization tool for ophthalmic surgery are discussed.
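The trade-off behind application-specific imaging modes (dense, slow capture scans vs. sparse, fast 4D rendering) can be made concrete with a small configuration sketch. All parameter values below are illustrative assumptions, not the authors' specifications.

```python
from dataclasses import dataclass

# Illustrative sketch of application-specific imaging modes for a
# swept-source OCT engine with a tunable sweep rate. Every number here
# is an assumption chosen to show the trade-off, not a reported value.

@dataclass(frozen=True)
class ImagingMode:
    name: str
    a_scan_rate_hz: float      # sweep rate of the tunable laser
    a_scans_per_b_scan: int
    b_scans_per_volume: int    # 1 means a repeated live B-scan

    @property
    def frame_rate_hz(self) -> float:
        """B-scan or volume repetition rate, ignoring dead time."""
        return self.a_scan_rate_hz / (
            self.a_scans_per_b_scan * self.b_scans_per_volume)

MODES = [
    ImagingMode("capture scan", 100_000, 1000, 1000),   # slow, dense, documentary
    ImagingMode("live B-scan", 600_000, 1200, 1),       # high lateral sampling
    ImagingMode("4D rendering", 1_200_000, 220, 220),   # sparse but volumetric
]

for m in MODES:
    print(f"{m.name}: {m.frame_rate_hz:.1f} frames or volumes per second")
```

The same engine thus serves three roles: dense single volumes for documentation, fast repeated B-scans for depth monitoring, and moderately sampled volumes at video rate for 4D visualization.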

5.
Int J Comput Assist Radiol Surg ; 15(5): 781-789, 2020 May.
Article in English | MEDLINE | ID: mdl-32242299

ABSTRACT

PURPOSE: Intraoperative optical coherence tomography (iOCT) was recently introduced as a new modality for ophthalmic surgeries. It provides real-time cross-sectional information at very high resolution. However, properly positioning the scan location during surgery is cumbersome and time-consuming, as a surgeon needs both hands for the surgery itself. The goal of the present study is to present a method to automatically position an iOCT scan on an anatomy of interest in the context of anterior segment surgeries.
METHODS: First, a voice recognition algorithm using a context-free grammar obtains the desired pose from the surgeon. Then, the limbus circle is detected in the microscope image and the iOCT scan is placed accordingly in the X-Y plane. Next, an iOCT sweep in the Z direction is conducted and the scan is placed to centre the topmost structure. Finally, the position is fine-tuned using semantic segmentation and a rule-based system.
RESULTS: The logic to position the scan location on various anatomies was evaluated on ex vivo porcine eyes (10 eyes for the corneal apex and 7 eyes for cornea, sclera, and iris). The mean Euclidean distance (± standard deviation) was 76.7 (± 59.2) pixels, corresponding to 0.298 (± 0.229) mm. The mean execution time (± standard deviation) across the four anatomies was 15 (± 1.2) seconds. The scans have a size of 1024 by 1024 pixels. The method was implemented on a Carl Zeiss OPMI LUMERA 700 with RESCAN 700.
CONCLUSION: The present study introduces a method to fully automatically position an iOCT scanner. Providing the possibility of changing the OCT scan location via voice commands removes the burden of manual device manipulation from surgeons. This in turn allows them to keep their focus on the surgical task at hand and may therefore increase the acceptance of iOCT in the operating room.
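The four-stage pipeline described in the METHODS section (voice command, limbus-based X-Y placement, Z sweep, rule-based fine-tuning) can be sketched as a control-flow skeleton. The helper functions are hypothetical stand-ins for the paper's components; only the orchestration logic is shown.

```python
# Skeleton of the automatic iOCT scan-positioning pipeline. The grammar,
# detector, and fine-tuning stubs below are hypothetical placeholders,
# not the authors' implementations.

GRAMMAR = {"scan": ["cornea", "sclera", "iris", "apex"]}  # toy command lexicon

def parse_command(words):
    """Map a recognized utterance to a target anatomy, or None if invalid."""
    if len(words) == 2 and words[0] == "scan" and words[1] in GRAMMAR["scan"]:
        return words[1]
    return None

def position_scan(words, detect_limbus, sweep_z, fine_tune):
    anatomy = parse_command(words)          # 1) grammar-based voice command
    if anatomy is None:
        return None
    cx, cy, r = detect_limbus()             # 2) X-Y placement from limbus circle
    z = sweep_z()                           # 3) Z sweep centers topmost structure
    return fine_tune(anatomy, (cx, cy, r), z)  # 4) rule-based refinement

# Toy run with stub detectors:
pose = position_scan(
    ["scan", "cornea"],
    detect_limbus=lambda: (512, 512, 300),
    sweep_z=lambda: 0.8,
    fine_tune=lambda a, xy, z: {"anatomy": a, "xy": xy[:2], "z": z},
)
print(pose)  # -> {'anatomy': 'cornea', 'xy': (512, 512), 'z': 0.8}
```

Separating command parsing from positioning keeps each stage independently testable, which matters when detectors are swapped per anatomy.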


Subjects
Intraoperative Monitoring/methods, Ophthalmologic Surgical Procedures/methods, Optical Coherence Tomography/methods, Algorithms, Animals, Cross-Sectional Studies, Eye/diagnostic imaging, Microscopy/instrumentation, Ophthalmologic Surgical Procedures/instrumentation, Swine
6.
Int J Comput Assist Radiol Surg ; 13(9): 1345-1355, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30054775

ABSTRACT

PURPOSE: Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich with many sources of information all presented through the same channel pose the risk of overstimulation and of missing crucial information. Augmenting the cognitive field with additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback that invites alarm fatigue.
METHODS: In this work, without loss of generality to other procedures, we propose a method for aural augmentation of medical procedures via automatic modification of musical pieces.
RESULTS: Evaluations of this concept regarding recognizability of the conveyed information, along with qualitative aesthetics, show the potential of our method.
CONCLUSION: In this paper, we proposed a novel sonification method for automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to convey useful information successfully. This work opens a path for advanced sonification techniques in the operating room, complementing traditional visual displays and conveying information more efficiently.
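One simple way to convey a scalar process variable by modifying a musical piece, in the spirit described above, is to map it to a transposition of the melody. The note representation and the distance-to-semitones mapping below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: convey tool-to-target distance by transposing a melody.
# The melody, the distance variable, and the mapping are all illustrative
# assumptions for this sketch.

BASE_MELODY = [60, 62, 64, 65, 67]  # MIDI note numbers (C major fragment)

def augment(melody, distance_mm, max_shift=7, safe_mm=5.0):
    """Transpose the melody upward as the tool approaches the target.

    At distance >= safe_mm the piece plays unchanged; at contact it is
    shifted up by max_shift semitones.
    """
    closeness = max(0.0, min(1.0, 1.0 - distance_mm / safe_mm))
    shift = round(max_shift * closeness)
    return [note + shift for note in melody]

print(augment(BASE_MELODY, 5.0))  # far: unchanged -> [60, 62, 64, 65, 67]
print(augment(BASE_MELODY, 0.0))  # at target: up 7 semitones -> [67, 69, 71, 72, 74]
```

Because the listener still hears a familiar piece rather than a beeping alarm, the information rides on a musical parameter, which is the key idea behind avoiding alarm fatigue.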


Subjects
Algorithms, Audiovisual Aids, Sensory Feedback, Sound, Computer-Assisted Surgery/methods, Vitreoretinal Surgery/methods, Humans
7.
IEEE Trans Vis Comput Graph ; 23(11): 2366-2371, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28809687

ABSTRACT

Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information brings distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required for perceiving information, and the avoidance of alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical modeling sound synthesis that targets focus-demanding tasks requiring extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments were conducted to assess the feasibility of the proposed method and compare it against visual augmented reality in high-precision tasks. The observed quantitative results suggest that utilizing sound patches generated by physical modeling achieves the desired goal of improving the user experience and general task performance with minimal training.
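A classic example of the class of technique named above (physical modeling sound synthesis) is the Karplus-Strong plucked-string algorithm, sketched below. Mapping a task variable to the string's pitch is an illustrative assumption, not the paper's specific design.

```python
import random

# Minimal physical modeling synthesis example: the Karplus-Strong
# plucked-string algorithm. A noise burst circulates through a delay
# line with a damped lowpass in the feedback path, producing a decaying
# string-like tone whose pitch is set by the delay-line length.

def pluck(frequency_hz, duration_s=0.5, sample_rate=44_100, decay=0.996):
    """Synthesize a plucked-string tone; returns samples in [-1, 1]."""
    period = int(sample_rate / frequency_hz)  # delay-line length sets pitch
    line = [random.uniform(-1.0, 1.0) for _ in range(period)]  # noise burst
    out = []
    for _ in range(int(duration_s * sample_rate)):
        sample = line.pop(0)
        # lowpass feedback: damped average of two successive samples
        line.append(decay * 0.5 * (sample + line[0]))
        out.append(sample)
    return out

samples = pluck(440.0, duration_s=0.1)  # a short A4 pluck
print(len(samples))  # -> 4410
```

A sonification scheme could then, for instance, re-pluck the string at a pitch derived from the tool's precision error, so the listener hears a natural instrument rather than a synthetic alarm.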


Subjects
Sensory Feedback/physiology, Neurological Models, Psychomotor Performance/physiology, Virtual Reality, Computer Graphics, Humans, Software
8.
Comput Math Methods Med ; 2016: 1067509, 2016.
Article in English | MEDLINE | ID: mdl-27867418

ABSTRACT

Detection of the instrument tip in retinal microsurgery videos is extremely challenging due to rapid motion, illumination changes, the cluttered background, and the deformable shape of the instrument. For the same reasons, frequent tracking failures add the overhead of reinitializing the tracker. In this work, a new method is proposed to localize not only the instrument center point but also its tips and orientation without the need for manual reinitialization. Our approach models the instrument as a Conditional Random Field (CRF) in which each part of the instrument is detected separately. The relations between these parts are modeled to capture the translation, rotation, and scale changes of the instrument. Tracking is done via separate detection of the instrument parts and evaluation of confidence via the modeled dependence functions. In case of low-confidence feedback, an automatic recovery process is performed. The algorithm is evaluated on in vivo ophthalmic surgery datasets, and its performance is comparable to state-of-the-art methods, with the advantage that no manual reinitialization is needed.
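The tracking loop described above (per-part detection, confidence from pairwise dependence terms, automatic recovery on low confidence) can be sketched as follows. The detectors, the geometric potential, and all numbers are hypothetical stubs; only the control flow mirrors the described scheme.

```python
# Control-flow sketch of tracking-by-detection with confidence-based
# automatic recovery. Detectors and the pairwise potential are
# hypothetical stand-ins for the paper's CRF components.

CONF_THRESHOLD = 0.5

def pairwise_score(part_a, part_b, expected_dist, tol=20.0):
    """Confidence that two detected parts are geometrically consistent."""
    dx, dy = part_a[0] - part_b[0], part_a[1] - part_b[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return max(0.0, 1.0 - abs(dist - expected_dist) / tol)

def track(frames, detect_parts, recover):
    poses = []
    for frame in frames:
        tip, shaft = detect_parts(frame)           # per-part detection
        confidence = pairwise_score(tip, shaft, expected_dist=30.0)
        if confidence < CONF_THRESHOLD:
            tip, shaft = recover(frame)            # automatic re-detection
        poses.append((tip, shaft))
    return poses

# Toy run: the second frame has a geometrically inconsistent detection
# (parts far apart), so the recovery stub is invoked for it.
dets = {0: ((100, 100), (100, 130)), 1: ((100, 100), (300, 300))}
poses = track([0, 1], detect_parts=dets.get,
              recover=lambda f: ((100, 100), (100, 131)))
print(poses[1])  # -> ((100, 100), (100, 131))
```

The point of scoring the joint configuration rather than each part alone is that a single drifting detector is caught by the dependence term, which is what removes the need for manual reinitialization.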


Subjects
Microsurgery/methods, Retina/surgery, Computer-Assisted Surgery/methods, Algorithms, Artificial Intelligence, Factual Databases, Equipment Design, Humans, Laparoscopy/methods, Statistical Models, Ophthalmologic Surgical Procedures/instrumentation, Ophthalmologic Surgical Procedures/methods, Automated Pattern Recognition, Reproducibility of Results, Computer-Assisted Signal Processing, Software, Surgical Instruments